1.
BMC Med Educ ; 24(1): 401, 2024 Apr 10.
Article En | MEDLINE | ID: mdl-38600457

BACKGROUND: Artificial intelligence (AI) is becoming increasingly important in healthcare. It is therefore crucial that today's medical students have certain basic AI skills that enable them to use AI applications successfully. These basic skills are often referred to as "AI literacy". Previous research projects that aimed to investigate medical students' AI literacy and attitudes towards AI have not used reliable and validated assessment instruments. METHODS: We used two validated self-assessment scales to measure AI literacy (31 Likert-type items) and attitudes towards AI (5 Likert-type items) at two German medical schools. The scales were distributed to the medical students through an online questionnaire. The final sample consisted of a total of 377 medical students. We conducted a confirmatory factor analysis and calculated the internal consistency of the scales to check whether the scales were sufficiently reliable to be used in our sample. In addition, we calculated t-tests to determine group differences and Pearson's and Kendall's correlation coefficients to examine associations between individual variables. RESULTS: The model fit and internal consistency of the scales were satisfactory. Within the concept of AI literacy, we found that medical students at both medical schools rated their technical understanding of AI significantly lower (M_MS1 = 2.85 and M_MS2 = 2.50) than their ability to critically appraise AI (M_MS1 = 4.99 and M_MS2 = 4.83) or practically use AI (M_MS1 = 4.52 and M_MS2 = 4.32), which reveals a discrepancy in skills. In addition, female medical students rated their overall AI literacy significantly lower than male medical students, t(217.96) = -3.65, p < .001. Students in both samples seemed to be more accepting of AI than fearful of the technology, t(745.42) = 11.72, p < .001. Furthermore, we discovered a strong positive correlation between AI literacy and positive attitudes towards AI and a weak negative correlation between AI literacy and negative attitudes. Finally, we found that prior AI education and interest in AI are positively correlated with medical students' AI literacy. CONCLUSIONS: Courses to increase the AI literacy of medical students should focus more on technical aspects. There also appears to be a correlation between AI literacy and attitudes towards AI, which should be considered when planning AI courses.
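The following is a minimal Python sketch of two of the analyses this abstract describes: internal consistency (Cronbach's alpha) for a Likert scale and a Welch t-test for a group difference. The simulated data, variable names, and 1-7 response range are assumptions for illustration only; they are not the study's actual data or code.

```python
# Illustrative sketch only: simulated Likert data, hypothetical variable names.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a scale (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated responses: 377 students, 31 Likert-type AI-literacy items (assumed 1-7 range).
rng = np.random.default_rng(0)
literacy = pd.DataFrame(rng.integers(1, 8, size=(377, 31)))
print(f"Cronbach's alpha: {cronbach_alpha(literacy):.2f}")

# Welch's t-test for a group difference, e.g. overall AI literacy by gender
# (the fractional degrees of freedom reported above suggest a Welch correction).
scores = literacy.mean(axis=1)
gender = rng.choice(["female", "male"], size=377)  # hypothetical grouping variable
t, p = stats.ttest_ind(scores[gender == "female"], scores[gender == "male"],
                       equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```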


Students, Medical , Humans , Male , Female , Literacy , Cross-Sectional Studies , Artificial Intelligence , Attitude of Health Personnel , Surveys and Questionnaires
2.
Article En | MEDLINE | ID: mdl-38563873

Serious games, as a learning resource, build their game character on embedded game design elements that are typically used in entertainment games. Serious games as a whole have already proven their teaching effectiveness in different educational contexts, including medical education. The embedded game design elements play an essential role in a game's effectiveness, and they should therefore be selected on the basis of evidence-based theories. For game design elements embedded in serious games used in the education of medical and healthcare professionals, however, an overview of theories to guide this selection is lacking. Additionally, it is still unclear whether and how individual game design elements affect learning effectiveness. Therefore, the main aim of this systematic review is threefold. First, light will be shed on the individual game design elements used in serious games in this area; second, the theories underlying these game design elements will be identified; and third, the effect of the game design elements on student learning outcomes will be assessed. Two literature searches were conducted in November 2021 and May 2022 in six literature databases with keywords covering the fields of educational game design, serious games, and medical education. Out of 1006 initial records, 91 were included after applying predefined exclusion criteria. Data analysis revealed that the three most common game design elements were points, storyline, and feedback. Only four underlying theories were mentioned, and no study evaluated specific game design elements. Since game design elements should be based on theories to ensure meaningful evaluations, the conceptual GATE framework is introduced, which facilitates the selection of evidence-based game design elements for serious games.
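As a purely illustrative sketch of the frequency tally behind the finding that points, storyline, and feedback were the most common game design elements, one might count coded elements across the included studies as below; the coded records are invented placeholders, not the review's extraction data.

```python
from collections import Counter

# Each included study is represented by the set of game design elements coded for it
# (placeholder entries; the review coded 91 included studies).
coded_studies = [
    {"points", "storyline", "feedback"},
    {"points", "levels", "feedback"},
    {"storyline", "badges"},
]

element_counts = Counter(element for study in coded_studies for element in study)
for element, n in element_counts.most_common(3):
    print(f"{element}: embedded in {n} studies")
```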

3.
Acad Med ; 99(5): 508-512, 2024 May 01.
Article En | MEDLINE | ID: mdl-38166323

PROBLEM: Creating medical exam questions is time-consuming, but well-written questions can be used for test-enhanced learning, which has been shown to have a positive effect on student learning. The automated generation of high-quality questions using large language models (LLMs), such as ChatGPT, would therefore be desirable. However, no current studies compare students' performance on LLM-generated questions with their performance on questions developed by humans. APPROACH: The authors compared student performance on questions generated by ChatGPT (LLM questions) with questions created by medical educators (human questions). Two sets of 25 multiple-choice questions (MCQs) were created, each with 5 answer options, 1 of which was correct. The first set of questions was written by an experienced medical educator, and the second set was created by ChatGPT 3.5 after the authors identified learning objectives and extracted some specifications from the human questions. Students answered all questions in random order in a formative paper-and-pencil test that was offered leading up to the final summative neurophysiology exam (summer 2023). For each question, students also indicated whether they thought it had been written by a human or ChatGPT. OUTCOMES: The final data set consisted of 161 participants and 46 MCQs (25 human and 21 LLM questions). There was no statistically significant difference in item difficulty between the 2 question sets, but discriminatory power was statistically significantly higher for human than for LLM questions (mean = .36, standard deviation [SD] = .09 vs mean = .24, SD = .14; P = .001). On average, students identified 57% of question sources (human or LLM) correctly. NEXT STEPS: Future research should replicate the study procedure in other contexts (e.g., other medical subjects, semesters, countries, and languages). In addition, the question of whether LLMs are suitable for generating different question types, such as key feature questions, should be investigated.
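The following is a minimal sketch, under stated assumptions, of the classical item analysis reported in this abstract: item difficulty as the proportion of correct answers and discriminatory power as a corrected item-total (point-biserial) correlation. The response matrix is simulated, the human/LLM column split is assumed, and the abstract does not name the significance test used; a Welch t-test is shown purely for illustration.

```python
# Illustrative item analysis on simulated data; not the study's data or code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# responses[i, j] = 1 if student i answered MCQ j correctly, else 0 (simulated)
responses = rng.integers(0, 2, size=(161, 46)).astype(float)

# Item difficulty: proportion of students answering each question correctly
difficulty = responses.mean(axis=0)

# Discriminatory power: correlation of each item with the rest-score
# (total score excluding that item), i.e. a corrected point-biserial correlation
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

# Assumed split: first 25 columns are human questions, remaining 21 are LLM questions
human, llm = discrimination[:25], discrimination[25:]
print(f"human: M = {human.mean():.2f}, SD = {human.std(ddof=1):.2f}")
print(f"LLM:   M = {llm.mean():.2f}, SD = {llm.std(ddof=1):.2f}")

# Welch's t-test comparing discriminatory power of the two question sets
t, p = stats.ttest_ind(human, llm, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```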


Educational Measurement , Humans , Educational Measurement/methods , Students, Medical/statistics & numerical data , Education, Medical, Undergraduate/methods , Education, Medical/methods , Language , Female , Male
...